# News Summarization Generation
## Mbart Large 50 Finetuned Xlsum Summarization
mBART-large-50 is a multilingual sequence-to-sequence model supporting text summarization and generation tasks in 50 languages.
- Task: Text Generation · Library: Transformers
- Author: skripsi-summarization-1234 · Downloads: 28.54k · Likes: 0
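A minimal sketch of running a checkpoint like this for multilingual summarization. The hub id below is hypothetical (the card does not give one), and mBART-50 requires the tokenizer's `src_lang` to be set to the input language's mBART code:

```python
# Sketch: multilingual summarization with an mBART-50 checkpoint.
# The model id is a placeholder -- substitute the real hub id.

# A few of mBART-50's language codes (ISO 639-1 -> mBART-50 code).
MBART50_LANG = {"en": "en_XX", "fr": "fr_XX", "hi": "hi_IN", "zh": "zh_CN"}

def mbart_lang(iso: str) -> str:
    """Map an ISO 639-1 code to the code the mBART-50 tokenizer expects."""
    return MBART50_LANG[iso]

def summarize(text: str, iso_lang: str,
              model_id: str = "your-org/mbart-large-50-finetuned-xlsum") -> str:
    # Heavy dependency imported lazily; requires `transformers` + PyTorch.
    from transformers import MBart50TokenizerFast, MBartForConditionalGeneration
    tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang=mbart_lang(iso_lang))
    model = MBartForConditionalGeneration.from_pretrained(model_id)
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    ids = model.generate(**inputs, num_beams=4, max_length=84)
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```

Setting `src_lang` matters because mBART-50 prefixes the input with a language token; summarization quality degrades if it does not match the article's language.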
## Distilbart Cnn 12 6
DistilBART-CNN-12-6 is a distilled version of BART designed for text summarization, offering a smaller model size and faster inference.
- Task: Text Generation · Library: Transformers
- Author: Mozilla · Downloads: 18 · Likes: 2
## T5 Small Abstractive Summarizer
A text summarization model based on the T5-small architecture, fine-tuned on the multi_news dataset and strong at generating abstractive summaries.
- License: Apache-2.0 · Task: Text Generation · Library: Transformers
- Author: MK-5 · Downloads: 80 · Likes: 0
## Text Summarization Cnn
A summarization model fine-tuned from Falconsai/text_summarization that extracts the key information from long texts into concise summaries.
- License: Apache-2.0 · Task: Text Generation · Library: Transformers
- Author: vmkhoa2000 · Downloads: 125 · Likes: 0
## Ptt5 Base Summ
A Brazilian Portuguese abstractive summarization model fine-tuned from PTT5, generating concise summaries of texts such as news articles.
- License: MIT · Task: Text Generation · Library: Transformers · Language: Other
- Author: recogna-nlp · Downloads: 853 · Likes: 0
## Gpt2 Finetuned Cnn Summarization V2
A text summarization model fine-tuned from GPT-2.
- License: MIT · Task: Text Generation · Library: Transformers
- Author: gavin124 · Downloads: 266 · Likes: 7
## Gpt2 Finetuned Cnn Summarization V1
A text summarization model fine-tuned from GPT-2.
- License: MIT · Task: Text Generation · Library: Transformers
- Author: gavin124 · Downloads: 24 · Likes: 1
## Ptt5 Base Summ Cstnews
A Brazilian Portuguese abstractive summarization model fine-tuned from PTT5.
- License: MIT · Task: Text Generation · Library: Transformers · Language: Other
- Author: recogna-nlp · Downloads: 47 · Likes: 8
## Ptt5 Base Summ Temario
A model fine-tuned from PTT5 for generating abstractive summaries of Brazilian Portuguese texts.
- License: MIT · Task: Text Generation · Library: Transformers · Language: Other
- Author: recogna-nlp · Downloads: 159 · Likes: 1
## Randeng Pegasus 523M Chinese
A Chinese version of the PEGASUS-large model specialized in text summarization tasks, trained on the PEGASUS architecture with optimizations for Chinese tokenization.
- Task: Text Generation · Library: Transformers · Language: Chinese
- Author: IDEA-CCNL · Downloads: 329 · Likes: 12
## Indicbart XLSum
IndicBART-XLSum is a sequence-to-sequence model pre-trained on the multilingual, independent-script IndicBART, focusing on Indian languages.
- Task: Large Language Model · Library: Transformers · Language: Other
- Author: ai4bharat · Downloads: 290 · Likes: 4
## Mt5 Small Sum De En V1
A bilingual summarization model based on multilingual T5, supporting English and German text summarization tasks.
- Task: Text Generation · Library: Transformers · Language: Multilingual
- Author: deutsche-telekom · Downloads: 1,210 · Likes: 8
## Distilbart Cnn 12 6
DistilBART is a distilled version of the BART model, specifically optimized for text summarization tasks, significantly improving inference speed while maintaining high performance.
- License: Apache-2.0 · Task: Text Generation · Language: English
- Author: sshleifer · Downloads: 783.96k · Likes: 278
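The simplest way to try a card like this is the `transformers` summarization pipeline. A minimal sketch, assuming the hub id `sshleifer/distilbart-cnn-12-6` (inferred from the author and title above) and that `transformers` plus a backend such as PyTorch are installed:

```python
# Sketch: one-call summarization via the transformers pipeline.
# Model id assumed from the card above.

def truncate_words(text: str, max_words: int = 700) -> str:
    """Crude pre-truncation so very long articles fit the encoder's window."""
    words = text.split()
    return " ".join(words[:max_words])

def summarize(text: str, model_id: str = "sshleifer/distilbart-cnn-12-6") -> str:
    from transformers import pipeline  # heavy import kept local
    summarizer = pipeline("summarization", model=model_id)
    out = summarizer(truncate_words(text), max_length=130, min_length=30, do_sample=False)
    return out[0]["summary_text"]
```

The word-level truncation is only a rough guard; the pipeline's tokenizer will still truncate precisely at the model's token limit.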
## Mt5 Base Wikinewssum English
An English summarization model fine-tuned from google/mt5-base that extracts key information from text into concise summaries.
- License: Apache-2.0 · Task: Text Generation · Library: Transformers
- Author: airKlizz · Downloads: 16 · Likes: 0
## Mt5 Small Sum De En V2
A bilingual summarization model based on the multilingual T5 model, supporting German and English text summarization tasks.
- Task: Text Generation · Library: Transformers · Language: Multilingual
- Author: T-Systems-onsite · Downloads: 227 · Likes: 14
## Bart Base Cnn R2 18.7 D23 Hybrid
A pruned and optimized BART-base model, specifically fine-tuned on the CNN/DailyMail dataset for summarization tasks.
- License: Apache-2.0 · Task: Text Generation · Library: Transformers · Language: English
- Author: echarlaix · Downloads: 18 · Likes: 0
## T5 Base Fr Sum Cnndm
A French text summarization model based on the T5 transformer, fine-tuned specifically for French generative summarization.
- Task: Text Generation · Library: Transformers · Language: French
- Author: plguillou · Downloads: 3,638 · Likes: 22
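T5-style checkpoints conventionally take a task prefix on the input. A minimal sketch, assuming the hub id `plguillou/t5-base-fr-sum-cnndm` (inferred from the card) and the usual `summarize: ` prefix:

```python
# Sketch: French abstractive summarization with a T5-style model.
# Model id and task prefix are assumptions inferred from T5 conventions.

def build_t5_input(text: str, prefix: str = "summarize: ") -> str:
    """Prepend the task prefix T5-style models were trained with."""
    return prefix + text.strip()

def summarize_fr(text: str, model_id: str = "plguillou/t5-base-fr-sum-cnndm") -> str:
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM  # heavy deps, local
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
    inputs = tokenizer(build_t5_input(text), return_tensors="pt",
                       truncation=True, max_length=512)
    ids = model.generate(**inputs, num_beams=4, max_length=150)
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```

Check the model card for the exact prefix the fine-tune expects; omitting or mismatching it typically hurts output quality.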